
    The Role of Pitch and Timbre in Voice Gender Categorization

    Voice gender perception can be thought of as a mixture of low-level perceptual feature extraction and higher-level cognitive processes. Although it seems apparent that voice gender perception would rely on low-level pitch analysis, many lines of research suggest that this is not the case. Indeed, voice gender perception has been shown to rely on timbre perception and to be categorical, i.e., to depend on accessing a gender model or representation. Here, we used a unique combination of acoustic stimulus manipulation and mathematical modeling of human categorization performance to determine the relative contribution of pitch and timbre to this process. Contrary to the idea that voice gender perception relies on timbre only, we demonstrate that voice gender categorization can be performed using pitch only but, more importantly, that pitch is used only when timbre information is ambiguous (i.e., for more androgynous voices).

    Improving standards in brain-behavior correlation analyses

    Associations between two variables, for instance between brain and behavioral measurements, are often studied using correlations, in particular Pearson's correlation. However, Pearson's correlation is not robust: outliers can introduce false correlations or mask existing ones. These problems are exacerbated in brain imaging by a widespread lack of control for multiple comparisons and by several issues of data interpretation. We illustrate these important problems associated with brain-behavior correlations, drawing examples from published articles, and make several propositions to alleviate these problems.
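
    The outlier problem described above is easy to reproduce. Below is a minimal Python sketch (illustrative only; the paper does not prescribe this implementation) showing how a single extreme point can create a spurious Pearson correlation between two independent variables, while a rank-based (Spearman) correlation is far less affected:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two independent variables: the true correlation is zero.
x = rng.normal(size=30)
y = rng.normal(size=30)

r_clean = np.corrcoef(x, y)[0, 1]

# A single extreme outlier can introduce a spurious correlation.
x_out = np.append(x, 10.0)
y_out = np.append(y, 10.0)
r_outlier = np.corrcoef(x_out, y_out)[0, 1]

def spearman(a, b):
    """Spearman's rho: Pearson correlation computed on ranks."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# The rank-based estimate is much less inflated by the outlier.
rho_outlier = spearman(x_out, y_out)

print(r_clean, r_outlier, rho_outlier)
```

The same resistance to outliers motivates the robust alternatives (e.g., skipped correlations) discussed in the paper.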

    Too Rich To Keep: Sharing the wealth of clinical data to enhance patient treatments

    This 7-minute presentation was made by Cyril Pernet of Edinburgh Imaging, University of Edinburgh, as part of the 24x7 'Making a Difference with Data' session at RepoFringe 2016.

    Misconceptions in the use of the General Linear Model applied to functional MRI: a tutorial for junior neuro-imagers

    This tutorial presents several misconceptions related to the use of the General Linear Model (GLM) in functional Magnetic Resonance Imaging (fMRI). The goal is not to present mathematical proofs but to educate using examples and computer code (in Matlab). In particular, I address issues related to (i) model parameterization (modelling baseline or null events) and scaling of the design matrix; (ii) haemodynamic modelling using basis functions; and (iii) computing percentage signal change. Using a simple controlled block design and an alternating block design, I first show why 'baseline' should not be modelled (model over-parameterization) and how this affects effect sizes. I also show that, depending on what is tested, over-parameterization does not necessarily impact statistical results. Next, using a simple periodic vs. random event-related design, I show how the haemodynamic model (haemodynamic function only or with derivatives) can affect parameter estimates, and detail the role of orthogonalization. I then relate the above results to the computation of percentage signal change. Finally, I discuss how these issues affect group analysis and give some recommendations.
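
    The over-parameterization point can be illustrated outside of Matlab. The following Python/numpy sketch (illustrative only; the tutorial's own code is in Matlab) uses an unconvolved boxcar design to show that adding an explicit baseline regressor alongside the constant term makes the design matrix rank-deficient:

```python
import numpy as np

# Alternating 10-scan task/rest blocks (no HRF convolution, for clarity).
task = np.tile(np.r_[np.ones(10), np.zeros(10)], 2)
baseline = 1.0 - task
constant = np.ones_like(task)

# Well-parameterized design: task regressor + constant term.
X1 = np.column_stack([task, constant])

# Over-parameterized design: task + baseline + constant.
# Since baseline = constant - task, the columns are linearly dependent.
X2 = np.column_stack([task, baseline, constant])

print(np.linalg.matrix_rank(X1))  # 2
print(np.linalg.matrix_rank(X2))  # 2, despite having 3 columns

# In X2 the individual betas are not uniquely estimable, although
# contrasts lying in the row space (e.g., task minus baseline) still are.
```

This is the sense in which modelling 'baseline' over-parameterizes the model while leaving some tested contrasts unaffected.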

    Test-retest reliability of structural brain networks from diffusion MRI

    Structural brain networks constructed from diffusion MRI (dMRI) and tractography have been demonstrated in healthy volunteers and more recently in various disorders affecting brain connectivity. However, few studies have addressed the reproducibility of the resulting networks. We measured the test–retest properties of such networks by varying several factors affecting network construction using ten healthy volunteers who underwent a dMRI protocol at 1.5 T on two separate occasions. Each T1-weighted brain was parcellated into 84 regions-of-interest and network connections were identified using dMRI and two alternative tractography algorithms, two alternative seeding strategies, a white matter waypoint constraint and three alternative network weightings. In each case, four common graph-theoretic measures were obtained. Network properties were assessed both node-wise and per network in terms of the intraclass correlation coefficient (ICC) and by comparing within- and between-subject differences. Our findings suggest that test–retest performance was improved when: 1) seeding from white matter, rather than grey; and 2) using probabilistic tractography with a two-fibre model and sufficient streamlines, rather than deterministic tensor tractography. In terms of network weighting, a measure of streamline density produced better test–retest performance than tract-averaged diffusion anisotropy, although it remains unclear which is a more accurate representation of the underlying connectivity. For the best performing configuration, the global within-subject differences were between 3.2% and 11.9% with ICCs between 0.62 and 0.76. The mean nodal within-subject differences were between 5.2% and 24.2% with mean ICCs between 0.46 and 0.62. For 83.3% (70/84) of nodes, the within-subject differences were smaller than between-subject differences. 
Overall, these findings suggest that whilst current techniques produce networks capable of characterising the genuine between-subject differences in connectivity, future work must be undertaken to improve network reliability.
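
    For readers unfamiliar with the ICC values reported above, the following toy Python sketch (with made-up numbers, not data from the study) computes a one-way ICC, i.e. the proportion of total variance attributable to between-subject differences, for five hypothetical subjects scanned twice:

```python
import numpy as np

# Toy test-retest data: 5 subjects x 2 sessions (values are illustrative).
scores = np.array([
    [0.50, 0.52],
    [0.61, 0.58],
    [0.45, 0.47],
    [0.70, 0.66],
    [0.55, 0.57],
])
n, k = scores.shape

grand = scores.mean()
subj_means = scores.mean(axis=1)

# One-way ANOVA decomposition: between-subject vs within-subject variance.
ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
ms_within = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))

# ICC(1,1): high values mean sessions agree well relative to the
# spread between subjects, i.e. good test-retest reliability.
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(round(icc, 2))
```

Here the within-subject (session-to-session) differences are small relative to the between-subject spread, so the ICC is high; the 0.46-0.76 range reported above reflects larger session-to-session variability.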

    The percentile bootstrap: a primer with step-by-step instructions in R.

    The percentile bootstrap is the Swiss Army knife of statistics: It is a nonparametric method based on data-driven simulations. It can be applied to many statistical problems, as a substitute for standard parametric approaches, or in situations for which parametric methods do not exist. In this Tutorial, we cover R code to implement the percentile bootstrap to make inferences about central tendency (e.g., means and trimmed means) and spread in a one-sample example and in an example comparing two independent groups. For each example, we explain how to derive a bootstrap distribution and how to get a confidence interval and a p value from that distribution. We also demonstrate how to run a simulation to assess the behavior of the bootstrap. For some purposes, such as making inferences about the mean, the bootstrap performs poorly. But for other purposes, it is the only known method that works well over a broad range of situations. More broadly, combining the percentile bootstrap with robust estimators (i.e., estimators that are not overly sensitive to outliers) can help users gain a deeper understanding of their data than they would using conventional methods.
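
    The procedure described above translates directly into code. Here is a minimal Python sketch (the Tutorial itself uses R) of a percentile bootstrap confidence interval for a 20% trimmed mean, a robust estimator of central tendency:

```python
import numpy as np

rng = np.random.default_rng(0)

def trimmed_mean(x, prop=0.2):
    """20% trimmed mean: drop the lowest and highest 20% of values."""
    x = np.sort(x)
    g = int(prop * len(x))
    return x[g:len(x) - g].mean()

# A small sample with one clear outlier.
sample = np.array([2.1, 2.4, 2.5, 2.7, 2.8, 3.0, 3.1, 3.3, 3.6, 9.0])

# Percentile bootstrap: resample with replacement, recompute the estimator.
nboot = 5000
boot = np.array([
    trimmed_mean(rng.choice(sample, size=len(sample), replace=True))
    for _ in range(nboot)
])

# 95% CI: the 2.5th and 97.5th percentiles of the bootstrap distribution.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))
```

A p value follows the same logic: compute the proportion of bootstrap estimates on one side of the null value, as the Tutorial explains.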

    Mindfulness related changes in grey matter: a systematic review and meta‐analysis

    Knowing which target regions undergo structural changes caused by behavioural interventions is paramount in evaluating the effectiveness of such practices. Here, using a systematic review approach, we identified 25 peer-reviewed magnetic resonance imaging (MRI) studies demonstrating grey matter changes related to mindfulness meditation. An activation likelihood estimation (ALE) analysis (n = 16) revealed the right anterior ventral insula as the only region with a consistent effect across studies, whilst an additional functional connectivity analysis indicates that both left and right insulae, and the anterior cingulate gyrus with adjacent paracingulate gyri, should also be considered in future studies. Statistical meta-analyses suggest medium to strong effect sizes, from Cohen’s d ~ 0.8 in the right insula to ~ 1 using maxima across the whole brain. The systematic review revealed design issues with selection, information, attrition and confirmation biases, in addition to weak statistical power. In conclusion, our analyses show that mindfulness meditation practice does induce grey matter changes, but also that improvements in methodology are needed to establish mindfulness as a therapeutic intervention.

    A controlled comparison of thickness, volume and surface areas from multiple cortical parcellation packages

    Background: Cortical parcellation is an essential neuroimaging tool for identifying and characterizing morphometric and connectivity brain changes occurring with age and disease. A variety of software packages have been developed for parcellating the brain’s cortical surface into a variable number of regions, but inter-package differences can undermine reproducibility. Using a ground truth dataset (Edinburgh_NIH10), we investigated such differences for grey matter thickness (GMth), grey matter volume (GMvol) and white matter surface area (WMsa) for the superior frontal gyrus (SFG), supramarginal gyrus (SMG) and cingulate gyrus (CG), from 4 parcellation protocols as implemented in the FreeSurfer, BrainSuite and BrainGyrusMapping (BGM) software packages.

    Results: Corresponding gyral definitions and morphometry approaches were not identical across the packages. As expected, there were differences in the bordering landmarks of each gyrus as well as in the manner in which variability was addressed. Rostral and caudal SFG and SMG boundaries differed, and in the event of a double CG occurrence, its upper fold was not always addressed. This led to a knock-on effect visible at the neighbouring gyri (e.g., at the SFG following CG definition) as well as in the gyral morphometric measurements of the affected gyri. Statistical analysis showed that the most consistent approaches were FreeSurfer’s Desikan-Killiany-Tourville (DKT) protocol for GMth and BrainGyrusMapping for GMvol. Package consistency varied for WMsa, depending on the region of interest.

    Conclusions: Given the significance and implications that a parcellation protocol will have on the classification, and sometimes treatment, of subjects, it is essential to select the protocol which accurately represents their regions of interest and corresponding morphometrics, while embracing cortical variability.